Name | Version | Summary | Date |
flowMC | 0.3.4 | Normalizing flow enhanced sampler in JAX | 2024-05-18 02:04:15 |
optimum-benchmark | 0.2.1 | Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes. | 2024-05-17 09:25:34 |
fastinference-llm | 0.0.5 | Seamlessly integrate with top LLM APIs for speedy, robust, and scalable querying. Ideal for developers needing quick, reliable AI-powered responses. | 2024-05-16 12:49:31 |
everai | 0.1.56 | Client library to manage everai infrastructure | 2024-05-16 11:26:05 |
figaro | 1.6.6 | FIGARO: Fast Inference for GW Astronomy, Research & Observations | 2024-05-14 11:43:49 |
optimum | 1.19.2 | Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality. | 2024-05-09 11:10:15 |
triton-model-navigator | 0.9.0 | Triton Model Navigator: an inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton. | 2024-05-07 21:51:54 |
optimum-neuron | 0.0.22 | Optimum Neuron is the interface between the Hugging Face Transformers and Diffusers libraries and AWS Trainium and Inferentia accelerators. It provides a set of tools enabling easy model loading, training, and inference on single- and multi-core Neuron settings for different downstream tasks. | 2024-05-07 16:51:04 |
ArtificialVision | 0.1.4 | Artificial Vision Library | 2024-05-04 10:34:11 |
tritonclient | 2.45.0 | Python client library and utilities for communicating with Triton Inference Server | 2024-04-30 22:43:02 |
triton-model-analyzer | 1.39.0 | Triton Model Analyzer is a tool to profile and analyze the runtime performance of one or more models on the Triton Inference Server | 2024-04-30 21:25:39 |
finsim | 0.11.4 | Financial simulation and inference | 2024-04-14 03:29:43 |
optimum-nvidia | 0.1.0b6 | Optimum Nvidia is the interface between the Hugging Face Transformers library and NVIDIA GPUs. | 2024-04-11 21:13:38 |
friendli-client | 1.3.4 | Client of Friendli Suite. | 2024-04-02 04:03:37 |
pydavid | 1.0.1 | A simple Python interface to Open-David | 2024-03-29 16:49:06 |
azcausal | 0.2.2 | Causal Inference | 2024-03-28 23:07:08 |
inference-server | 1.2.1 | Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image. | 2024-03-21 10:18:07 |
cofi | 0.2.8 | Common Framework for Inference | 2024-03-21 05:45:31 |